- Path: keats.ugrad.cs.ubc.ca!not-for-mail
- From: c2a192@ugrad.cs.ubc.ca (Kazimir Kylheku)
- Newsgroups: comp.lang.c
- Subject: Re: What will happen close to 640K limit ?
- Date: 2 Feb 1996 13:27:39 -0800
- Organization: Computer Science, University of B.C., Vancouver, B.C., Canada
- Message-ID: <4etvkbINN8dg@keats.ugrad.cs.ubc.ca>
- References: <Pine.SOL.3.91.960128110843.26154C-100000@hamlet.uncg.edu> <4ein9b$330@news.bellglobal.com>
- NNTP-Posting-Host: keats.ugrad.cs.ubc.ca
-
- In article <4ein9b$330@news.bellglobal.com>,
- Steve Tupy <stupy@freenet.durham.org> wrote:
- >: I wrote a DOS program with a size of about 512K, which is the size I get
- >: when I use dir to check it.
- >:
- > This may not be totally reliable...
- >
- >: I also used the DOS command mem to check the available DOS memory, which
- >: gave me a maximum executable size of 560K, because there are some device
- >: drivers I load in config.sys. I know 512K is not the real program size;
- >: there are also stack variables and memory allocated on the heap.
- >
- >: Now my program is not stable: it can run fine for a couple of days, then
- >: it locks up. It may just lock up the keyboard or the screen, or it may
- >: reboot the PC. I wonder if this is because it is close to the 640K limit?
- >: How can I check the real program size, including the stack and the heap?
- >
- > Because DOS is not reentrant, if you intend to run it for any length
- >of time WITH memory allocation/deallocation, you almost MUST implement some
- >sort of garbage collection scheme to "defrag" your memory segments. This is
- >why you see all these BBS packages dumping out of memory and loading up
- >again from the command line; it flushes the memory back to its original
- >state, in a manner of speaking. You can either avoid memory allocation,
- >implement a garbage collector, or reload your program from time to time;
- >these are about the only ways to deal with this problem that I know of...
-
- This is all true, but it has nothing to do with reentrancy (or with
- comp.lang.c) and everything to do with the lack of virtual memory management
- to eliminate external fragmentation and to provide address spaces larger than
- what is physically available.
-
- These problems were effectively solved long before DOS, so the latter is not
- worth discussing, really.
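-
- To make "external fragmentation" concrete: the free space may add up to
- plenty, but it is scattered in holes too small to satisfy a request. Here is
- a toy first-fit allocator over a 16-byte "heap" (just an illustration I made
- up, nothing DOS-specific):
-
-     #include <stdio.h>
-     #include <string.h>
-
-     #define HEAP 16
-
-     static char used[HEAP];            /* 1 = byte in use, 0 = free */
-
-     /* first fit: find n contiguous free bytes, mark them, return the offset */
-     static int heap_alloc(int n)
-     {
-         int i, j;
-
-         for (i = 0; i + n <= HEAP; i++) {
-             for (j = 0; j < n && !used[i + j]; j++)
-                 ;
-             if (j == n) {
-                 memset(used + i, 1, n);
-                 return i;
-             }
-         }
-         return -1;                     /* no hole is big enough */
-     }
-
-     static void heap_free(int off, int n)
-     {
-         memset(used + off, 0, n);
-     }
-
-     int main(void)
-     {
-         int off[8], i, free_bytes = 0;
-
-         for (i = 0; i < 8; i++)        /* fill the heap with 2-byte blocks */
-             off[i] = heap_alloc(2);
-         for (i = 0; i < 8; i += 2)     /* free every other block */
-             heap_free(off[i], 2);
-
-         for (i = 0; i < HEAP; i++)
-             free_bytes += !used[i];
-
-         printf("%d bytes free, yet a 4-byte request %s\n", free_bytes,
-                heap_alloc(4) < 0 ? "fails" : "succeeds");
-         return 0;
-     }
-
- Eight of the sixteen bytes are free, but no hole is larger than two bytes, so
- the 4-byte request fails even though twice that much memory is sitting idle.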
-
- Regarding fragmentation, there is an article in an old issue of the Journal of
- the ACM (something like vol. 18). It contains a proof that if a requestor
- allocates and deallocates blocks of power-of-two sizes from 1 to 2^b, and at
- no time has more than N bytes allocated, the worst case requires something
- like N * (1 + b) bytes of store; that is, even against an allocator playing
- the optimal strategy, the requests can force that large a memory footprint.
-
- This means that if you are grabbing blocks between 1 byte and 4K in size up to
- a limit of 1MB total, you could end up claiming as much as 13MB of storage and
- not be able to fill a 4K allocation request without requiring additional memory
- from the OS. Rather pessimistic!
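-
- If you care to check the arithmetic, here is a throwaway sketch (the numbers
- are just the example above, not anything from the paper):
-
-     #include <stdio.h>
-
-     /* worst-case footprint of roughly N * (1 + b) for block sizes 1 .. 2^b */
-     static unsigned long worst_case(unsigned long n, unsigned int b)
-     {
-         return n * (1UL + b);
-     }
-
-     int main(void)
-     {
-         unsigned long n = 1024UL * 1024UL;  /* at most 1MB allocated at once */
-         unsigned int b = 12;                /* blocks of 1 byte .. 4K = 2^12 */
-
-         printf("worst case: %lu bytes (%lu MB)\n",
-                worst_case(n, b), worst_case(n, b) / (1024UL * 1024UL));
-         return 0;
-     }
-
- That prints 13631488 bytes, i.e. the 13MB figure.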
-
- Also note that if your VM scheme is based on 4K pages, then in this worst case
- you can't even return a single page to the OS: if you could, that would mean
- there is a "hole" in your heap at least 4K large, and you _could_ satisfy the
- largest possible allocation request of 4K without growing the heap.
-
- Mind you, J. M. Robson (the article's author) only proved the result for block
- sizes that are dyadic powers.
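-
- Short of a real VM system, the practical dodge is the one quoted above: stop
- doing general-purpose allocation. One way (a rough sketch of my own, not
- anything from this thread) is to carve everything out of a single fixed arena
- and throw it all away at once, which is what reloading the program achieves
- the hard way:
-
-     #include <stddef.h>
-
-     #define ARENA_SIZE (32U * 1024U)
-
-     static unsigned char arena[ARENA_SIZE];
-     static size_t arena_used = 0;
-
-     /* hand out nbytes from the arena; NULL when it is exhausted */
-     void *arena_alloc(size_t nbytes)
-     {
-         void *p;
-
-         /* round the size up so every block stays long-aligned */
-         nbytes = (nbytes + sizeof(long) - 1) & ~(sizeof(long) - 1);
-
-         if (nbytes > ARENA_SIZE - arena_used)
-             return NULL;
-
-         p = arena + arena_used;
-         arena_used += nbytes;
-         return p;
-     }
-
-     /* release everything in one shot, like restarting the program */
-     void arena_reset(void)
-     {
-         arena_used = 0;
-     }
-
- Nothing is ever freed individually, so nothing can fragment; the price is that
- you must find points in the program where it is safe to drop the whole lot.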
- --
-
-